Mapping with uncertainty representation is required in many research domains, such as localization and sensor fusion. Although uncertainty has been explored extensively for estimating the pose of an ego-robot against map information, the quality of the reference map itself is often neglected. To avoid potential problems caused by map errors and the lack of uncertainty quantification, an adequate uncertainty measure for maps is required. In this paper, uncertain building models with an abstract map surface based on Gaussian Processes (GP) are proposed to measure map uncertainty in a probabilistic way. To reduce redundant computation for simple planar objects, facets extracted from a Gaussian Mixture Model (GMM) are combined with the implicit GP map, and local GP-block techniques are used as well. The proposed method is evaluated on LiDAR point clouds of city buildings collected by a mobile mapping system. Compared with other methods such as OctoMap, Gaussian Process Occupancy Maps (GPOM) and Bayesian Generalized Kernel Inference (BGKOctoMap), our method achieves a higher Precision-Recall AUC on the evaluated buildings.
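As a rough illustration of the underlying idea, the sketch below fits a GP to synthetic facade points and reads the predictive standard deviation as a per-point map-uncertainty measure; the kernel choice, coordinates, and values are placeholder assumptions, not the paper's actual pipeline.

```python
# Minimal sketch (not the paper's exact formulation): model a building facade as an
# implicit surface z = f(x, y) with a GP and use the predictive standard deviation
# as a per-point map-uncertainty measure. Point values are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(200, 2))                       # planar coordinates on the facade
depth = 0.05 * np.sin(xy[:, 0]) + 0.01 * rng.normal(size=200)    # surface offset + sensor noise

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(xy, depth)

query = np.array([[2.5, 7.0], [9.5, 0.5]])
mean, std = gp.predict(query, return_std=True)
print("predicted offset:", mean, "map uncertainty (std):", std)
```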
Machine learning models are typically evaluated by computing their similarity with reference annotations and are trained by maximizing that similarity. Especially in the biomedical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect one annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies for evaluating and improving model performance are reviewed.
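One way to picture the proposed approximation, under my own simplifying assumptions rather than the paper's exact procedure, is to treat inter-rater agreement (measured here with Dice on toy masks) as a ceiling beyond which higher model-vs-reference similarity may no longer imply better RWMP:

```python
# Hedged illustration (a simplification, not the paper's exact technique):
# estimate a performance ceiling from inter-rater agreement. If two annotators
# agree with a Dice of ~0.8, pushing model-vs-reference Dice far above that value
# may no longer reflect better real-world performance.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

rng = np.random.default_rng(1)
rater1 = rng.random((64, 64)) > 0.6            # placeholder annotation
noise = rng.random((64, 64)) > 0.9             # simulated disagreement on ~10% of pixels
rater2 = np.logical_xor(rater1, noise)         # second annotator's (diverging) annotation

inter_rater = dice(rater1, rater2)             # proxy for Peak Ground Truth
model_vs_ref = 0.92                            # hypothetical model-vs-reference score
print(f"inter-rater Dice (PGT proxy): {inter_rater:.2f}")
print("beyond PGT" if model_vs_ref > inter_rater else "below PGT")
```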
When simulating soft robots, both their morphology and their controllers play important roles in task performance. This paper introduces a new method to co-evolve these two components in the same process. We do so by using the HyperNEAT algorithm to generate two separate neural networks in one pass, one responsible for the design of the robot body structure and the other for the control of the robot. The key difference between our method and most existing approaches is that it does not treat the development of the morphology and the controller as separate processes. Similar to nature, our method derives both the "brain" and the "body" of an agent from a single genome and develops them together. While our approach is more realistic and does not require an arbitrary separation of processes during evolution, it also makes the problem more complex: the search space for this single genome becomes larger, and any mutation to the genome affects the "brain" and the "body" at the same time. Additionally, we present a new speciation function that takes into consideration both the genotypic distance, as is standard for NEAT, and the similarity between robot bodies. With this function, agents with very different bodies are more likely to be placed in different species, which allows robots with different morphologies to evolve more specialized controllers, since they will not cross over with robots that are too different from them. We evaluate the presented methods on four tasks and observe that, even though the search space is larger, having a single genome makes the evolution process converge faster than having separate genomes for body and control. The agents in our population also show morphologies with a high degree of regularity and controllers capable of coordinating the voxels to produce the necessary movements.
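A minimal sketch of the combined speciation idea is given below; the weighting, the voxel-grid encoding, and the dissimilarity measure are assumptions for illustration, not the paper's exact definitions.

```python
# Sketch of a combined speciation measure (names and weights are assumptions):
# blend NEAT-style genotypic distance with a body-dissimilarity term so that
# robots with very different morphologies tend to fall into different species.
import numpy as np

def body_dissimilarity(body_a: np.ndarray, body_b: np.ndarray) -> float:
    """Fraction of voxels whose material type differs between two body grids."""
    return float(np.mean(body_a != body_b))

def combined_distance(genotypic_distance: float,
                      body_a: np.ndarray,
                      body_b: np.ndarray,
                      w_geno: float = 1.0,
                      w_body: float = 2.0) -> float:
    return w_geno * genotypic_distance + w_body * body_dissimilarity(body_a, body_b)

# Two 3x3x3 voxel bodies encoded as integer material IDs (0 = empty).
a = np.ones((3, 3, 3), dtype=int)
b = a.copy()
b[0] = 0                                       # remove one layer -> different morphology
print(combined_distance(0.4, a, b))            # larger distance -> likelier separate species
```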
Many, if not most, systems of interest in science are naturally described as nonlinear dynamical systems (DS). Empirically, we commonly access these systems through time series measurements, and we often have time series from several data modalities simultaneously; for instance, event counts in addition to some continuous signal. While there are by now many powerful machine learning (ML) tools for integrating different data modalities into predictive models, this has rarely been approached from the perspective of uncovering the underlying, data-generating DS (also known as DS reconstruction). Recently, sparse teacher forcing (TF) has been suggested as an efficient control-theoretic method for dealing with exploding loss gradients when training ML models on chaotic DS. Here we incorporate this idea into a novel recurrent neural network (RNN) training framework for DS reconstruction based on multimodal variational autoencoders (MVAE). The forcing signal for the RNN is generated by the MVAE, which integrates the different types of simultaneously observed time series into a joint latent code optimal for DS reconstruction. We show that this training method achieves significantly better reconstructions on multimodal datasets generated from chaotic DS benchmarks than various alternative methods.
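The sketch below illustrates the general sparse-teacher-forcing idea in plain numpy, with a noisy encoded trajectory standing in for the MVAE's joint latent code; it is a simplification, not the proposed framework.

```python
# Minimal numpy sketch of sparse teacher forcing (a simplification of the general
# idea, not the paper's MVAE-based framework): every `tf_interval` steps the RNN's
# latent state is replaced by a forcing signal -- here a noisy encoding of the data,
# standing in for the MVAE's joint latent code -- which keeps gradients from
# exploding on chaotic trajectories during training.
import numpy as np

rng = np.random.default_rng(0)
T, d = 200, 3
data_latent = np.cumsum(rng.normal(size=(T, d)), axis=0)    # placeholder "encoded" trajectory

W = rng.normal(scale=0.3, size=(d, d))                      # toy RNN weights
tf_interval = 10                                            # sparse forcing interval

z = data_latent[0]
trajectory = [z]
for t in range(1, T):
    z = np.tanh(W @ z)                                      # free-running RNN step
    if t % tf_interval == 0:
        z = data_latent[t] + 0.01 * rng.normal(size=d)      # sparse forcing from the encoder
    trajectory.append(z)

print(np.stack(trajectory).shape)                           # (200, 3) reconstructed latent path
```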
A rapid transformation of the current electric power and natural gas (NG) infrastructure is required to meet mid-century carbon dioxide emission reduction targets. This calls for long-term planning of the joint power-NG system under representative demand and supply patterns, operational constraints, and policy considerations. Our work is motivated by the computational and practical challenges associated with solving the generation and transmission expansion problem (GTEP) for the joint planning of power-NG systems. Specifically, we focus on efficiently extracting a set of representative days from the power and NG data of the corresponding networks and using this set to reduce the computational burden required to solve the GTEP. We propose a Graph Autoencoder for Multiple time-resolution Energy Systems (GAMES) to capture the spatio-temporal demand patterns in the interdependent networks and to account for differences in the temporal resolution of the available data. The resulting embeddings are used in a clustering algorithm to select the representative days. We evaluate the effectiveness of our approach in solving a GTEP formulation calibrated to the joint power-NG system of New England. This formulation accounts for the physical interdependencies between the power and NG systems, including joint emissions constraints. Our results show that the set of representative days obtained from GAMES not only allows us to solve the GTEP formulation tractably, but also leads to lower costs for implementing the joint planning decisions.
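A hedged sketch of the representative-day selection step might look as follows; the embeddings, the number of clusters, and the use of k-means are placeholder assumptions, since the paper's graph autoencoder and clustering details are not reproduced here.

```python
# Hedged sketch of representative-day selection (the graph autoencoder itself is
# omitted; the embeddings below are placeholders): cluster per-day embeddings and
# pick, for each cluster, the actual day closest to the centroid.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
day_embeddings = rng.normal(size=(365, 16))      # stand-in for learned embeddings, one row per day

k = 12                                           # number of representative days (assumption)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(day_embeddings)

representative_days = []
for c in range(k):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(day_embeddings[members] - km.cluster_centers_[c], axis=1)
    representative_days.append(int(members[np.argmin(dists)]))

print(sorted(representative_days))               # day indices used in the reduced GTEP
```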
We introduce knowledge-driven program synthesis (KDPS) as a variant of the program synthesis task that requires an agent to solve a sequence of program synthesis problems. In KDPS, the agent should use knowledge gained from earlier problems to solve later ones. We propose a novel PushGP-based method to solve the KDPS problem, which takes subprograms as knowledge. The proposed method extracts subprograms from the solutions of previously solved problems using an Even Partitioning (EP) method and uses these subprograms to solve upcoming programming tasks via Adaptive Replacement Mutation (ARM). We call this method PushGP+EP+ARM. With PushGP+EP+ARM, no human effort is needed in the knowledge extraction and utilization processes. We compare the proposed method with PushGP and with a method that uses subprograms manually extracted by humans. Compared with PushGP, our PushGP+EP+ARM achieves better training error, a higher success count, and faster convergence. Furthermore, we demonstrate the superiority of PushGP+EP+ARM when solving a sequence of six program synthesis problems consecutively.
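The toy sketch below conveys the flavor of extracting subprograms by even partitioning and reusing them in a replacement-style mutation; the token names, the partition rule, and the simplified mutation are assumptions and omit the adaptive component of ARM.

```python
# Illustrative sketch only (simplified; not the paper's exact operators): split a
# solved program's instruction sequence into equal-sized chunks and keep them as
# candidate subprograms ("knowledge") for later mutation-based reuse.
import random

def even_partition(program: list, n_parts: int) -> list:
    """Split an instruction list into contiguous chunks of (nearly) equal size."""
    size = max(1, len(program) // n_parts)
    return [program[i:i + size] for i in range(0, len(program), size)]

solved = ["in1", "int_dup", "int_add", "int_dup", "int_mult", "print_int"]
knowledge = even_partition(solved, n_parts=3)

def replacement_mutation(program: list, knowledge: list) -> list:
    """Toy stand-in for ARM: replace a random instruction with a stored subprogram."""
    i = random.randrange(len(program))
    return program[:i] + random.choice(knowledge) + program[i + 1:]

print(replacement_mutation(["in1", "int_add", "print_int"], knowledge))
```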
In many scientific disciplines, we are interested in inferring the nonlinear dynamical system underlying a set of observed time series, a challenging task in the face of chaotic behavior and noise. Previous deep learning approaches toward this goal often lacked interpretability and tractability. In particular, the high-dimensional latent spaces often required for faithful embedding, even when the underlying dynamics live on a lower-dimensional manifold, hamper theoretical analysis. Motivated by the emerging principles of dendritic computation, we augment a dynamically interpretable and mathematically tractable piecewise-linear (PL) recurrent neural network (RNN) with a linear spline basis expansion. We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet improves its capacity to approximate arbitrary nonlinear dynamical systems in comparatively low dimensions. We employ two frameworks for training the system: one combining backpropagation through time (BPTT) with teacher forcing, and another based on fast and scalable variational inference. We show that the dendritically expanded PLRNN achieves better reconstructions with fewer parameters and dimensions on various dynamical systems benchmarks, compared with other methods, while retaining a tractable and interpretable structure.
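A rough numpy sketch of one latent-state update with a spline ("dendritic") basis expansion is shown below; parameter shapes, scales, and names are assumptions for illustration, not the authors' implementation.

```python
# Rough sketch of a piecewise-linear RNN step with a linear spline basis expansion:
# each unit's nonlinearity is a weighted sum of several ReLU branches with different
# thresholds, instead of a single ReLU as in the plain PLRNN.
import numpy as np

rng = np.random.default_rng(0)
d, B = 5, 4                                     # latent dimension, number of spline bases

A = np.diag(rng.uniform(0.3, 0.7, size=d))      # diagonal autoregression
W = rng.normal(scale=0.05, size=(d, d))         # off-diagonal coupling
h = rng.normal(scale=0.1, size=d)               # bias
alpha = rng.normal(scale=0.3, size=(d, B))      # per-unit basis weights
thresh = rng.normal(size=(d, B))                # per-unit basis thresholds

def step(z: np.ndarray) -> np.ndarray:
    # dendritic expansion: sum_b alpha[:, b] * relu(z - thresh[:, b]), unit-wise
    phi = np.sum(alpha * np.maximum(z[:, None] - thresh, 0.0), axis=1)
    return A @ z + W @ phi + h

z = rng.normal(size=d)
for _ in range(50):
    z = step(z)
print(z)                                        # latent state after 50 steps
```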
Despite their ubiquity throughout science and engineering, only a handful of partial differential equations (PDEs) have analytical or closed-form solutions. This has motivated a vast amount of classical work on numerical simulation of PDEs and, more recently, research into data-driven techniques based on machine learning (ML). A recent line of work shows that hybrids of classical numerical techniques and machine learning can offer significant improvements over either approach alone. In this work, we show that the choice of the numerical scheme is crucial when incorporating physics-based priors. We build upon Fourier-based spectral methods, which are considerably more efficient than other numerical schemes for simulating PDEs with smooth and periodic solutions. Specifically, we develop ML-augmented spectral solvers for three model PDEs of fluid dynamics, which improve upon the accuracy of standard spectral solvers at the same resolution. We also demonstrate a handful of key design principles for combining machine learning with numerical methods for solving PDEs.
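For context, the Fourier-spectral building block such hybrid solvers rest on can be illustrated in a few lines; this is a generic spectral derivative, not the paper's ML-augmented solver.

```python
# Generic Fourier-spectral derivative: differentiate a smooth periodic function by
# multiplying its FFT coefficients by i*k, which is spectrally accurate.
import numpy as np

n, L = 128, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(3 * x)                                # smooth, periodic test function

k = np.fft.fftfreq(n, d=L / n) * 2.0 * np.pi     # angular wavenumbers
du_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

print(np.max(np.abs(du_spectral - 3 * np.cos(3 * x))))   # error near machine precision
```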
In agriculture, most vision systems perform still-image classification. Yet, recent work has highlighted the potential of spatial and temporal cues as a rich source of information for improving classification performance. In this paper, we present novel approaches that explicitly capture spatial and temporal information to improve the classification of deep convolutional neural networks. We leverage available RGB-D images and robot odometry to perform inter-frame spatial registration of feature maps. This information is then fused within recurrent deep-learning models to improve their accuracy and robustness. We demonstrate that this considerably improves classification performance, with our best-performing spatio-temporal model (ST-Atte) achieving absolute improvements in intersection-over-union (IoU [%]) of 4.7 and 2.6 for fruit (sweet pepper) segmentation. Furthermore, we show that these approaches are robust to variable frame rates and odometry errors, which are frequently observed in real-world applications.
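The pinhole-geometry sketch below illustrates the spatial-registration step in isolation; the intrinsics, relative pose, and pixel values are made-up assumptions rather than the authors' pipeline.

```python
# Hedged sketch of inter-frame spatial registration: use depth and the relative
# camera pose from odometry to find where a pixel from the previous frame lands in
# the current frame, so feature maps can be aligned before temporal fusion.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                  # assumed pinhole intrinsics

def register_pixel(u, v, depth, R, t):
    """Back-project (u, v, depth) from the previous frame and re-project into the current one."""
    p_prev = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])   # 3D point in previous camera
    p_curr = R @ p_prev + t                                     # apply relative pose from odometry
    uvw = K @ p_curr
    return uvw[:2] / uvw[2]                                     # pixel location in current frame

R = np.eye(3)                                    # toy odometry: small forward motion, no rotation
t = np.array([0.0, 0.0, -0.05])
print(register_pixel(400.0, 240.0, depth=1.5, R=R, t=t))
```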
Magnetic resonance imaging (MRI) is a central modality for stroke imaging. It is used upon patient admission for treatment decisions, such as selecting patients for intravenous thrombolysis or endovascular therapy. MRI is used later during the hospital stay to predict outcome by visualizing the size and location of the infarct core. Furthermore, it can be used to characterize stroke etiology, for example to differentiate between (cardio-)embolic and non-embolic strokes. Computer-based automated medical image processing is increasingly entering clinical routine. Previous iterations of the Ischemic Stroke Lesion Segmentation (ISLES) challenge have helped to generate benchmark methods for the segmentation of acute and sub-acute ischemic stroke lesions. Here, we introduce an expert-annotated, multicenter MRI dataset for the segmentation of acute to subacute stroke lesions. The dataset comprises 400 multi-vendor MRI cases with high variability in stroke lesion size, quantity, and location. It is split into a training dataset of n=250 and a test dataset of n=150. All training data will be made publicly available. The test dataset will be used only for model validation and will not be released to the public. This dataset is the foundation of the ISLES 2022 challenge, whose goal is to enable the development and benchmarking of robust and accurate segmentation algorithms for ischemic stroke.